Regularization Methods for Additive Models
Authors
Abstract
This paper addresses model complexity in the context of additive models. Several methods have been proposed to estimate smoothing parameters and to perform variable selection; nevertheless, these procedures are inefficient or computationally expensive in high dimensions. The lasso technique has also been adapted to additive models, but its experimental performance has not been analyzed. We propose a modified lasso for additive models that improves variable selection, and we develop a benchmark to examine its practical behavior in comparison with forward selection. Our simulation studies suggest that the proposed method is able to carry out model selection. The lasso technique outperforms forward selection in the most complex situations, and the computing time of the modified lasso is considerably smaller since it does not depend on the number of relevant variables.
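The general idea behind lasso-type variable selection for additive models can be sketched as follows: expand each covariate in a small basis, fit all expanded features with an l1 penalty, and read off which covariates keep nonzero coefficients. This is an illustrative sketch only, not the paper's algorithm; the basis choice (polynomials as a stand-in for splines), the penalty level `alpha`, and all variable names are assumptions for the demo.

```python
# Hedged sketch: lasso-based variable selection for an additive model.
# Assumptions for illustration: polynomial basis of degree 3, alpha = 0.05.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, degree = 1000, 5, 3

X = rng.uniform(-1, 1, size=(n, p))
# Only the first two covariates matter: f(x) = sin(2*x0) + x1**2 + noise.
y = np.sin(2 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(n)

# Expand each covariate in a small polynomial basis (a stand-in for splines),
# then standardize so the single penalty level treats all columns alike.
basis = np.hstack([X ** d for d in range(1, degree + 1)])  # shape (n, p*degree)
basis = (basis - basis.mean(0)) / basis.std(0)

model = Lasso(alpha=0.05).fit(basis, y)

# Regroup the fitted coefficients by original variable; a covariate counts as
# "selected" when any of its basis coefficients survives the l1 penalty.
coefs = model.coef_.reshape(degree, p)   # row d-1 holds the degree-d terms
group_norms = np.linalg.norm(coefs, axis=0)
selected = np.flatnonzero(group_norms > 1e-8)
print(selected)
```

Note that this plain lasso penalizes basis coefficients individually; grouped penalties (as in the grouped-regularization paper listed below) instead shrink each covariate's whole coefficient block to zero jointly, which selects variables more directly.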
Similar resources
Sparse Regularization for High Dimensional Additive Models
We study the behavior of the l1 type of regularization for high dimensional additive models. Our results suggest remarkable similarities and differences between linear regression and additive models in high dimensional settings. In particular, our analysis indicates that, unlike in linear regression, l1 regularization does not yield optimal estimation for additive models of high dimensionality....
On High Dimensional Post-Regularization Prediction Intervals
This paper considers the construction of prediction intervals for future observations in high dimensional regression models. We propose a new approach to evaluate the uncertainty for estimating the mean parameter based on the widely-used penalization/regularization methods. The proposed method is then applied to construct prediction intervals for sparse linear models as well as sparse additive ...
Penalized estimation in additive varying coefficient models using grouped regularization
Additive varying coefficient models are a natural extension of multiple linear regression models, allowing the regression coefficients to be functions of other variables. Therefore these models are more flexible to model more complex dependencies in data structures. In this paper we consider the problem of selecting in an automatic way the significant variables among a large set of variables, w...
The Entropy Regularization Information Criterion
Effective methods of capacity control via uniform convergence bounds for function expansions have been largely limited to Support Vector machines, where good bounds are obtainable by the entropy number approach. We extend these methods to systems with expansions in terms of arbitrary (parametrized) basis functions and a wide range of regularization methods covering the whole range of general li...
Convex-constrained Sparse Additive Modeling and Its Extensions
Sparse additive modeling is a class of effective methods for performing high-dimensional nonparametric regression. In this work we show how shape constraints such as convexity/concavity and their extensions, can be integrated into additive models. The proposed sparse difference of convex additive models (SDCAM) can estimate most continuous functions without any a priori smoothness assumption. M...
Journal:
Volume, Issue:
Pages: -
Publication date: 2003